In this paper, we propose SGEM, Stochastic Gradient with Energy and Momentum, to solve a large class of general non-convex stochastic optimization problems, building on the AEGD method introduced in [AEGD: Adaptive Gradient Descent with Energy, arXiv:2010.05109]. SGEM incorporates both energy and momentum so as to inherit their dual advantages. We show that SGEM features unconditional energy stability, and we derive an energy-dependent convergence rate in the general non-convex stochastic setting, as well as a regret bound in the online convex setting. A lower threshold for the energy variable is also provided. Our experimental results show that SGEM converges faster than AEGD and generalizes better than, or at least as well as, SGDM in training some deep neural networks.
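To make the energy-plus-momentum idea concrete, here is a minimal numpy sketch of one plausible way to combine an AEGD-style energy variable with a momentum buffer. The step size, momentum coefficient, energy constant c, and the per-coordinate energy are assumptions for illustration, not the exact SGEM recursion from the paper.

```python
# Illustrative sketch only: an AEGD-style energy variable combined with a
# momentum buffer, loosely following the description above. Hyperparameters
# and the per-coordinate energy choice are assumptions, not the authors' code.
import numpy as np

def sgem_like_step(x, grad, f_val, r, m, eta=0.1, beta=0.9, c=1.0):
    """One update on a loss with f >= -c; r is the energy variable, m the momentum."""
    m = beta * m + (1.0 - beta) * grad          # momentum on the stochastic gradient
    v = m / (2.0 * np.sqrt(f_val + c))          # momentum-smoothed gradient of sqrt(f + c)
    r = r / (1.0 + 2.0 * eta * v * v)           # energy update (element-wise, stays positive)
    x = x - 2.0 * eta * r * v                   # energy-rescaled descent step
    return x, r, m

# Toy usage: minimize f(x) = ||x||^2 / 2
x = np.array([3.0, -2.0])
r = np.full_like(x, np.sqrt(0.5 * x @ x + 1.0))  # r0 = sqrt(f(x0) + c)
m = np.zeros_like(x)
for _ in range(200):
    f_val, grad = 0.5 * x @ x, x
    x, r, m = sgem_like_step(x, grad, f_val, r, m)
print(x)  # should approach the minimizer [0, 0]
```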
In this letter, we propose a robust, real-time, inertial navigation system (INS)-centric GNSS-visual-inertial navigation system (IC-GVIN) for wheeled robots, in which the precise INS is fully exploited in both the visual process and the state estimation. To improve system robustness, INS information is employed throughout the keyframe-based visual process together with a strict outlier-rejection strategy. GNSS is adopted to perform accurate and convenient initialization of IC-GVIN, and is further used to achieve absolute positioning in large-scale environments. The IMU, visual, and GNSS measurements are tightly fused within a factor graph optimization framework. Dedicated experiments were conducted to evaluate the robustness and accuracy of IC-GVIN on a wheeled robot. IC-GVIN exhibits superior robustness in various visually degraded scenes with moving objects. Compared with state-of-the-art visual-inertial navigation systems, the proposed method achieves improved robustness and accuracy in various environments. We open-source our code together with the dataset on GitHub.
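As a toy illustration of why absolute GNSS information complements an INS-centric pipeline, the numpy snippet below fuses an INS-predicted position with a GNSS fix by inverse-covariance weighting. The real system instead tightly couples IMU, visual, and GNSS factors in a full factor graph; all names and values here are invented.

```python
# Conceptual toy only: minimum-variance fusion of an INS-predicted position
# with an absolute GNSS fix. Not the IC-GVIN factor graph; numbers are made up.
import numpy as np

def fuse(ins_pos, ins_cov, gnss_pos, gnss_cov):
    """Fuse two independent position estimates by their inverse covariances."""
    W_ins = np.linalg.inv(ins_cov)               # information (inverse covariance)
    W_gnss = np.linalg.inv(gnss_cov)
    cov = np.linalg.inv(W_ins + W_gnss)          # fused covariance
    pos = cov @ (W_ins @ ins_pos + W_gnss @ gnss_pos)
    return pos, cov

ins_pos = np.array([10.2, 4.9, 1.1])             # drifting INS prediction (m)
gnss_pos = np.array([10.0, 5.0, 1.0])            # absolute GNSS fix (m)
pos, cov = fuse(ins_pos, np.diag([0.5, 0.5, 1.0]) ** 2,
                gnss_pos, np.diag([1.0, 1.0, 2.0]) ** 2)
print(pos)  # fused position, pulled toward the more certain INS estimate
```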
A key challenge of online news recommendation is helping users find articles they are interested in. Traditional news recommendation methods typically use a single source of news information, which is insufficient to encode news and user representations. Recent research uses multiple channels of news information, such as title, category, and body, to enhance news and user representations. However, these methods only use various attention mechanisms to fuse the multi-view embeddings, without considering the deeper, higher-level information contained in the context. These methods also encode news content at the word level and jointly train the attention parameters within the recommendation network, so that larger corpora are required to train the model. We propose an Event Extraction-based News Recommendation (EENR) framework to overcome these shortcomings, exploiting event extraction to abstract higher-level information. EENR also uses a two-stage strategy to reduce the parameters in the subsequent parts of the recommendation network. In the first stage, we train the event extraction module on an external corpus; in the second stage, we apply the trained model to the news recommendation dataset to predict event-level information, including event types, roles, and arguments. We then fuse multiple channels of information, including event information, news titles, and categories, to encode news and users. Extensive experiments on a real-world dataset show that our EENR method can effectively improve news recommendation performance. Finally, we also explore the reasonableness of using information at a higher abstraction level as a substitute for news body content.
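The snippet below sketches the fusion step in a hypothetical PyTorch form: an attention-weighted combination of event, title, and category embeddings into a single news vector. The dimensions, layer shapes, and module name are illustrative assumptions, not the EENR implementation.

```python
# Hypothetical sketch of attention-based fusion of multi-channel news embeddings.
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, event_emb, title_emb, cat_emb):
        channels = torch.stack([event_emb, title_emb, cat_emb], dim=1)  # (B, 3, dim)
        weights = torch.softmax(self.score(channels), dim=1)            # per-channel attention
        return (weights * channels).sum(dim=1)                          # (B, dim) news vector

fusion = ChannelFusion(dim=128)
news_vec = fusion(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
print(news_vec.shape)  # torch.Size([4, 128])
```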
Social media platforms may provide potential space for discourse that contains hate speech, and, even worse, can act as a propagation mechanism for hate crimes. The FBI's Uniform Crime Reporting (UCR) Program collects hate crime data and releases statistical reports annually. These statistics provide information for determining national hate crime trends. They can also provide valuable holistic and strategic insight for law enforcement agencies, or justify specific legislation for lawmakers. However, the reports are usually released in the following year, lagging behind many immediate needs. Recent research has mainly focused on hate speech detection in social media text or on empirical studies of the impact of confirmed crimes. This paper proposes a framework that first utilizes text mining techniques to extract hate crime events from New York Times news, and then exploits the results to facilitate the prediction of national-level and state-level hate crime trends in the United States. Experimental results show that, compared with time series or regression methods without event-related factors, our approach can significantly improve prediction performance. Our framework broadens the methods available for national-level and state-level hate crime trend prediction.
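A hedged sketch of the prediction stage follows: a lag-based regression of monthly hate crime counts augmented with mined event counts as an extra feature, run on synthetic numbers. The feature set and model choice are assumptions used only to show how event-related factors could enter the regression, not the paper's pipeline.

```python
# Illustrative only: augmenting a lag-based regression with mined event counts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = 60
news_events = rng.poisson(5, months)                        # mined event counts per month
crimes = 20 + 2 * news_events + rng.normal(0, 3, months)    # synthetic target series

# features: previous month's crime count (time-series lag) + mined event count
X = np.column_stack([np.roll(crimes, 1)[1:], news_events[1:]])
y = crimes[1:]
model = LinearRegression().fit(X[:-12], y[:-12])             # train on all but the last year
print(model.score(X[-12:], y[-12:]))                         # held-out R^2
```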
Current text classification methods typically encode the text into an embedding before a naive or complex classifier, ignoring the suggestive information contained in the label text. In fact, humans classify documents primarily based on the semantic meaning of the subcategories. We propose a novel model structure via Siamese BERT and an interactive double attention, named IDEA (Interactive DoublE Attention), to capture the information exchange between texts and label names. The interactive double attention enables the model to exploit inter-class and intra-class information from coarse to fine, which involves distinguishing among all labels and matching the semantic subclass of the ground-truth label. Our proposed method significantly outperforms state-of-the-art methods that use label text, and yields more stable results.
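The following PyTorch sketch shows one way the described text-label interaction could look: cross-attention in both directions between text token embeddings and label-name embeddings. The encoder, dimensions, and pooling are assumptions; this is not the authors' IDEA code.

```python
# Rough sketch (not the authors' code): bidirectional text-label cross-attention.
import torch
import torch.nn as nn

class TextLabelAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.text_to_label = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.label_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tok, label_emb):
        # text attends to labels, labels attend to text (the "double" interaction)
        text_ctx, _ = self.text_to_label(text_tok, label_emb, label_emb)
        label_ctx, _ = self.label_to_text(label_emb, text_tok, text_tok)
        # pooled representations; a classifier head would score each label
        return text_ctx.mean(dim=1), label_ctx.mean(dim=1)

attn = TextLabelAttention()
t, l = attn(torch.randn(2, 50, 256), torch.randn(2, 10, 256))
print(t.shape, l.shape)  # torch.Size([2, 256]) torch.Size([2, 256])
```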
Data augmentation techniques are widely used in text classification tasks to improve classifier performance, especially in low-resource scenarios. Most previous methods perform text augmentation without considering the different functions of the words in the text, which may generate unsatisfactory samples. Different words may play different roles in text classification, which motivates us to strategically select appropriate roles for text augmentation. In this work, we first identify the relationships between the words in a text and the text category, from the perspectives of statistical correlation and semantic similarity, as different functional roles for text classification. Based on these word roles, we propose a new augmentation technique called STA (Selective Text Augmentation), in which different text-editing operations are selectively applied to words with specific roles. STA can generate diverse and relatively clean samples while preserving the original core semantics, and is also simple to implement. Extensive experiments on 5 benchmark low-resource text classification datasets show that the augmented samples produced by STA successfully boost the performance of classification models, significantly outperforming previous non-selective methods, including two large language model-based techniques. Cross-dataset experiments further show that, compared with previous methods, STA can help classifiers generalize better to other datasets.
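Below is a toy Python sketch of the selective principle: words judged class-indicative are never edited, while other words remain eligible for a deletion operation. The role criterion and the single edit operation are simplified placeholders rather than the STA implementation.

```python
# Toy sketch of selective augmentation: protect class-indicative words, edit the rest.
import random

def selective_deletion(tokens, class_keywords, p_drop=0.3, seed=0):
    """Randomly drop words that are NOT class-indicative; keep the core semantics."""
    rng = random.Random(seed)
    kept = []
    for tok in tokens:
        if tok.lower() in class_keywords:        # role: class-indicating word -> never edited
            kept.append(tok)
        elif rng.random() > p_drop:              # other roles: eligible for deletion
            kept.append(tok)
    return kept

sentence = "the new phone camera is stunning but the battery drains fast".split()
print(" ".join(selective_deletion(sentence, {"camera", "battery", "phone"})))
```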
Given the long list of anomaly detection algorithms developed over the past decades, how do they perform with regard to (i) different levels of supervision, (ii) different types of anomalies, and (iii) noisy and corrupted data? In this work, we answer these key questions by conducting (to the best of our knowledge) the most comprehensive benchmark, named ADBench, with 30 algorithms on 55 benchmark datasets. Our extensive experiments (93,654 in total) identify meaningful insights into the role of supervision and anomaly types, and unlock future directions for researchers in algorithm selection and design. With ADBench, researchers can easily conduct comprehensive and fair evaluations of newly proposed methods against existing baselines on these datasets (including our contributions from the natural language and computer vision domains). To foster accessibility and reproducibility, we fully open-source ADBench and the corresponding results.
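To illustrate the kind of evaluation such a benchmark automates, the snippet below fits one unsupervised detector on synthetic data and scores it with ROC-AUC; this is not ADBench's actual harness, and the data and detector choice are arbitrary.

```python
# Illustrative only: one unsupervised detector evaluated with ROC-AUC.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(950, 8))
X_anom = rng.normal(4, 1, size=(50, 8))                  # injected anomalies
X = np.vstack([X_normal, X_anom])
y = np.r_[np.zeros(950), np.ones(50)]                    # 1 = anomaly

clf = IsolationForest(random_state=0).fit(X)
scores = -clf.score_samples(X)                           # higher = more anomalous
print(round(roc_auc_score(y, scores), 3))
```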
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
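A minimal sketch of the token-level idea (not the CMT architecture or code): image and point-cloud tokens are concatenated into one memory, and object queries attend to them through a standard transformer decoder. All dimensions, token counts, and the box parameterization are assumptions.

```python
# Minimal sketch of multi-modal tokens feeding a transformer decoder; illustrative only.
import torch
import torch.nn as nn

dim, num_queries = 256, 100
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2)
box_head = nn.Linear(dim, 10)          # e.g., center, size, yaw, velocity per query

img_tokens = torch.randn(2, 600, dim)  # flattened multi-view image features
pts_tokens = torch.randn(2, 400, dim)  # voxel/pillar point-cloud features
memory = torch.cat([img_tokens, pts_tokens], dim=1)   # one multi-modal token set
queries = torch.randn(2, num_queries, dim)            # object queries
boxes = box_head(decoder(queries, memory))            # (2, 100, 10) box parameters
print(boxes.shape)
```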
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
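For concreteness, here is a sketch in the spirit of the simpler NAIVEATTACK flavor: a patch trigger stamped onto a fraction of raw images before distillation, with their labels flipped to the target class. Patch size, position, poison fraction, and target label are illustrative assumptions, not the paper's settings.

```python
# Sketch of a patch-trigger poisoning step in the style of NAIVEATTACK; values are assumed.
import numpy as np

def add_patch_trigger(images, labels, target_label=0, poison_frac=0.1, patch=4):
    """Stamp a white square in the corner of a fraction of images and relabel them."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = np.random.default_rng(0).choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0        # bottom-right white patch trigger
    labels[idx] = target_label                    # point triggered samples at the target class
    return images, labels

imgs = np.random.rand(100, 32, 32, 3).astype(np.float32)
lbls = np.random.randint(0, 10, 100)
poisoned_imgs, poisoned_lbls = add_patch_trigger(imgs, lbls)
print((poisoned_lbls == 0).sum())  # includes the 10 poisoned samples relabeled to class 0
```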
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
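The sketch below illustrates the first insight in a hypothetical form: pool a class center from support features under the support mask, then gate query features by their similarity to that center. Shapes and the gating function are assumptions, not the RefT module.

```python
# Hypothetical sketch of mask-pooled class centers re-weighting query features.
import torch
import torch.nn.functional as F

def reweight_query(support_feat, support_mask, query_feat):
    """support_feat: (C, H, W), support_mask: (H, W) in {0, 1}, query_feat: (C, H, W)."""
    mask = support_mask.unsqueeze(0)                                           # (1, H, W)
    center = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)   # (C,) class center
    center_map = center[:, None, None].expand_as(query_feat)                   # broadcast center
    sim = F.cosine_similarity(query_feat, center_map, dim=0)                   # (H, W) similarity
    return query_feat * torch.sigmoid(sim).unsqueeze(0)                        # gated query features

q = reweight_query(torch.randn(64, 32, 32),
                   (torch.rand(32, 32) > 0.7).float(),
                   torch.randn(64, 32, 32))
print(q.shape)  # torch.Size([64, 32, 32])
```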